
    Profitable Double-Spending Attacks

    Our aim in this paper is to investigate the profitability of double-spending (DS) attacks that manipulate previously mined transactions in a blockchain. To date, it has been understood that the requirement for a successful DS attack is to occupy a higher proportion of computing power than the rest of the target network; i.e., to occupy more than 50% of the computing power. On the contrary, we show that networks are also vulnerable to DS attacks using less than 50% of the computing power. Namely, a DS attack using any proportion of computing power can occur as long as there is a chance of making a profit; i.e., the revenue of the attack is greater than the cost of launching it. We present novel derivations, based on probability theory, for calculating the attack success probability within a finite time. These can be used to quantify the resources needed to compute the revenue and the cost. The results enable us to derive necessary and sufficient conditions on the value of a target transaction under which DS attacks are profitable for any proportion of computing power. They can also be used to assess the risk to one's transaction by checking whether or not the transaction value satisfies the conditions for a profitable DS attack. Two examples are provided in which we evaluate the attack resources and the conditions for profitable DS attacks, given a 35% proportion of computing power, against the Syscoin and BitcoinCash networks, and quantitatively show how vulnerable they are. Comment: 13 pages, 1 figure. Submitted to IEEE Transactions on Information Forensics and Security. Table 1 has been corrected.
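    The revenue-versus-cost condition can be sketched as follows. This is a minimal illustration only: it uses Nakamoto's classical Poisson-race catch-up probability as a stand-in for the paper's finite-time attack probability, and the function names and parameters are our own.

```python
import math

def catch_up_probability(q: float, z: int) -> float:
    """Probability that an attacker with hash-power share q (< 0.5)
    eventually overtakes an honest chain that is z blocks ahead
    (Nakamoto's Poisson-race approximation, not the paper's
    finite-time derivation)."""
    p = 1.0 - q
    lam = z * q / p  # expected attacker progress while honest miners mine z blocks
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam**k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

def is_profitable(value: float, cost: float, q: float, z: int) -> bool:
    """Profitability condition: expected revenue (success probability
    times the double-spent transaction value) exceeds the attack cost."""
    return catch_up_probability(q, z) * value > cost
```

    With 35% of the computing power the success probability is well below one, so the attack becomes profitable only when the target transaction value is large enough relative to the attack cost, which is the spirit of the paper's value conditions.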

    Concise Probability Distributions of Eigenvalues of Real-Valued Wishart Matrices

    In this paper, we consider the problem of deriving new eigenvalue distributions of real-valued Wishart matrices, which arise in many scientific and engineering applications. The distributions are derived using tools from the theory of skew-symmetric matrices. In particular, we express the multiple integrals of a determinant, which arise while finding the eigenvalue distributions, in terms of the Pfaffian of a skew-symmetric matrix. Since the Pfaffian is the square root of the determinant of a skew-symmetric matrix, it is easier to compute than the conventional distributions that involve zonal polynomials or beta integrals. We show that plots of the derived distributions coincide exactly with numerically simulated plots. Comment: Submitted to Math Journal, 7 pages.
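    The key computational object here, the Pfaffian, can be evaluated directly for small matrices; the sketch below (our own, not the paper's implementation) uses the standard recursive expansion along the first row and checks the defining identity pf(A)^2 = det(A).

```python
import numpy as np

def pfaffian(A: np.ndarray) -> float:
    """Pfaffian of an even-dimensional skew-symmetric matrix via
    recursive expansion along the first row (fine for small n)."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2 == 1:
        return 0.0  # odd-dimensional skew-symmetric matrices have zero Pfaffian
    total = 0.0
    for j in range(1, n):
        # delete row/column 0 and j to form the Pfaffian minor
        minor = np.delete(np.delete(A, [0, j], axis=0), [0, j], axis=1)
        total += (-1) ** (j - 1) * A[0, j] * pfaffian(minor)
    return total
```

    For a 2x2 block [[0, a], [-a, 0]] this returns a, and for larger skew-symmetric matrices the square of the result equals the determinant, which is why Pfaffian-based expressions avoid the heavier zonal-polynomial machinery.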

    On the Compressed Measurements over Finite Fields: Sparse or Dense Sampling

    We consider compressed sampling over finite fields and investigate the number of compressed measurements needed for successful L0 recovery. Our results are obtained while both the sparseness of the sensing matrices and the size of the finite fields are varied. One interesting conclusion is that unless the signal is "ultra" sparse, the sensing matrices do not have to be dense. Comment: 10 pages, 2 figures, other essential information.
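    Over a finite field, L0 recovery can be stated very concretely; the brute-force sketch below (our own illustration, for GF(2) only) searches for the minimum-weight solution of y = Ax and is feasible only for tiny dimensions.

```python
import itertools
import numpy as np

def l0_recover_gf2(A: np.ndarray, y: np.ndarray):
    """Exhaustive minimum-weight (L0) solution of y = A @ x over GF(2).
    A: m x n binary matrix, y: length-m binary vector.
    Tries sparsity levels k = 0, 1, 2, ... so the first hit is sparsest."""
    m, n = A.shape
    for k in range(n + 1):
        for support in itertools.combinations(range(n), k):
            x = np.zeros(n, dtype=int)
            x[list(support)] = 1
            if np.array_equal(A @ x % 2, y % 2):
                return x
    return None
```

    The question the paper studies is how many rows m such an A needs, as a function of its density and the field size, for this search to return the true signal.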

    Detection-Directed Sparse Estimation using Bayesian Hypothesis Test and Belief Propagation

    In this paper, we propose a sparse recovery algorithm called detection-directed (DD) sparse estimation using a Bayesian hypothesis test (BHT) and belief propagation (BP). In this framework, we consider the use of sparse-binary sensing matrices, which have the tree-like property, and the sampled-message approach for the implementation of BP. The key idea behind the proposed algorithm is that recovery takes a DD-estimation structure consisting of two parts: support detection and signal value estimation. BP and the BHT perform the support detection, and an MMSE estimator finds the signal values using the detected support set. The proposed algorithm provides robustness against measurement noise beyond the conventional MAP approach, as well as a way to remove the quantization effect of sampled-message-based BP independently of the memory size used for message sampling. We explain how the proposed algorithm attains these characteristics through illustrative discussion. In addition, our experiments validate the superiority of the proposed algorithm over recent algorithms in a noisy setup. Interestingly, the experimental results show that the performance of the proposed algorithm approaches that of the oracle estimator as the SNR increases.
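    The two-part detection-directed structure can be sketched in a few lines. This is a simplified stand-in: a matched-filter threshold test replaces the paper's BP-plus-BHT detector, and plain least squares replaces the MMSE stage; the names and the threshold parameter are our own.

```python
import numpy as np

def dd_estimate(A, y, threshold):
    """Detection-directed estimation sketch:
    (1) detect the support by thresholding the matched-filter
        statistic |A.T @ y| (stand-in for BP + Bayesian hypothesis test),
    (2) estimate the signal values on the detected support by least squares
        (stand-in for the MMSE estimator)."""
    stat = np.abs(A.T @ y)
    support = np.flatnonzero(stat > threshold)
    x_hat = np.zeros(A.shape[1])
    if support.size:
        x_hat[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x_hat, support
```

    The point of the structure is that once the support is detected correctly, the estimation step is a small well-posed problem, which is why performance can approach the oracle estimator at high SNR.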

    On Detection-Directed Estimation Approach for Noisy Compressive Sensing

    In this paper, we investigate a Bayesian sparse reconstruction algorithm called compressive sensing via Bayesian support detection (CS-BSD). This algorithm is quite robust against measurement noise and achieves the performance of a minimum mean square error (MMSE) estimator with support knowledge beyond a certain SNR threshold. The key idea behind CS-BSD is that reconstruction takes a detection-directed estimation structure consisting of two parts: support detection and signal value estimation. Belief propagation (BP) and a Bayesian hypothesis test perform the support detection, and an MMSE estimator finds the signal values belonging to the support set. CS-BSD converges faster than other BP-based algorithms, and it can be converted to a parallel architecture to become even faster. Numerical results are provided to verify the superiority of CS-BSD over recent algorithms. Comment: 22 pages, 7 figures, 1 table, 1 algorithm table.
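    The benchmark mentioned here, the MMSE estimator with support knowledge, is easy to write down. The sketch below is our own illustration of that oracle bound, assuming i.i.d. Gaussian nonzero entries and Gaussian measurement noise (a ridge-regularized least-squares solve on the known support).

```python
import numpy as np

def mmse_on_support(A, y, support, noise_var, signal_var):
    """Linear-MMSE estimate of the nonzero entries given the true support
    (the oracle performance CS-BSD approaches beyond an SNR threshold),
    assuming x_S ~ N(0, signal_var * I) and noise ~ N(0, noise_var * I)."""
    As = A[:, support]
    ridge = (noise_var / signal_var) * np.eye(len(support))
    x_s = np.linalg.solve(As.T @ As + ridge, As.T @ y)
    x_hat = np.zeros(A.shape[1])
    x_hat[list(support)] = x_s
    return x_hat
```

    As the noise variance shrinks, the ridge term vanishes and the estimate reduces to least squares on the support, which is why the oracle gap closes at high SNR.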

    Bernoulli-Gaussian Approximate Message-Passing Algorithm for Compressed Sensing with 1D-Finite-Difference Sparsity

    This paper proposes a fast approximate message-passing (AMP) algorithm for solving compressed sensing (CS) recovery problems with 1D-finite-difference sparsity in terms of MMSE estimation. The proposed algorithm, named ssAMP-BGFD, is computationally efficient, with fast convergence and a cheap per-iteration cost, providing a phase transition that nearly approaches the state of the art. The algorithm originates from a sum-product message-passing rule, applies a Bernoulli-Gaussian (BG) prior, and seeks an MMSE solution. Its construction includes not only the conventional AMP technique for measurement fidelity but also a simplified message-passing method to promote signal sparsity in the finite-difference domain. Furthermore, we provide an EM-tuning methodology to learn the BG prior parameters, and we suggest practical measurement matrices satisfying the RIP requirement under ssAMP-BGFD recovery. Extensive empirical results confirm the performance of the proposed algorithm in phase transition, convergence speed, and CPU runtime, compared to recent algorithms. Comment: 17 pages, 13 figures, submitted to the IEEE Transactions on Signal Processing.
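    The core nonlinearity of a BG-prior AMP iteration is the scalar posterior-mean (MMSE) denoiser; the sketch below is a textbook version of that denoiser, not the paper's full ssAMP-BGFD update, and the parameter names are ours.

```python
import numpy as np

def bg_posterior_mean(r, noise_var, eps, signal_var):
    """MMSE denoiser for a Bernoulli-Gaussian prior
    x ~ eps * N(0, signal_var) + (1 - eps) * delta_0,
    observed as r = x + N(0, noise_var).
    Returns E[x | r]: the posterior nonzero-probability times the
    usual Gaussian shrinkage of r."""
    s, sx = noise_var, signal_var
    # spike-vs-slab likelihood ratio -> posterior P(x != 0 | r)
    ratio = ((1 - eps) / eps) * np.sqrt((sx + s) / s) * np.exp(
        -r**2 * sx / (2 * s * (sx + s)))
    pi = 1.0 / (1.0 + ratio)
    return pi * (sx / (sx + s)) * r
```

    Small observations are shrunk almost to zero while large ones pass nearly unchanged; in ssAMP-BGFD a denoiser of this type is applied to finite differences of the signal rather than to the signal itself, and the EM step learns eps and the variances.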

    Restricted Isometry Random Variables: Probability Distributions, RIC Prediction and Phase Transition Analysis for Gaussian Encoders

    In this paper, we aim to generalize the notion of the restricted isometry constant (RIC) in compressive sensing (CS) to a restricted isometry random variable (RIV). Associated with a deterministic encoder there are two RICs, namely the left and the right RIC. We show that these RICs can be generalized to a left RIV and a right RIV for an ensemble of random encoders. We derive the probability density and cumulative distribution functions of these RIVs for the most widely used i.i.d. Gaussian encoders. We also derive the asymptotic distributions of the RIVs and show that the distribution of the left RIV converges (in distribution) to the Weibull distribution, whereas that of the right RIV converges to the Gumbel distribution. By adopting the RIV framework, we bring to the forefront that the current practice of using eigenvalues for RIC prediction can be improved. We show, on the one hand, that eigenvalue-based approaches tend to overestimate the RICs; on the other hand, the RIV-based analysis yields precise estimates of the RICs. We also demonstrate that this precise estimation helps improve the previous RIC-based phase transition analysis in CS. Comment: 15 pages; in revision.

    Holistic random encoding for imaging through multimode fibers

    The input numerical aperture (NA) of a multimode fiber (MMF) can be effectively increased by placing turbid media at the input end of the MMF. This provides the potential for high-resolution imaging through the MMF. While the input NA is increased, the number of propagation modes in the MMF, and hence the output NA, remains the same. This makes the image reconstruction process underdetermined and may limit the quality of the image reconstruction. In this paper, we aim to improve the signal-to-noise ratio (SNR) of image reconstruction in imaging through an MMF. We notice that turbid media placed at the input of the MMF transform the incoming waves into a better format for information transmission and extraction. We call this transformation the holistic random (HR) encoding of turbid media. By exploiting the HR encoding, we make a considerable improvement in the SNR of the image reconstruction. For efficient utilization of the HR encoding, we employ sparse representation (SR), a relatively new signal reconstruction framework, on the HR-encoded signal. This study shows, for the first time to our knowledge, the benefit of utilizing the HR encoding of turbid media for recovery in optically underdetermined systems, where the output NA is smaller than the input NA, for imaging through an MMF. Comment: under review for possible publication in Optics Express.

    Intentional Aliasing Method to Improve Sub-Nyquist Sampling System

    A modulated wideband converter (MWC) has been introduced as a sub-Nyquist sampler that exploits a set of fast-alternating pseudo-random (PR) signals. Through parallel sampling branches, an MWC compresses a multiband spectrum by mixing it with the PR signals in the time domain and acquires its sub-Nyquist samples. Previously, the compression ratio depended entirely on the specifications of the PR signals; that is, to further reduce the sampling rate without information loss, faster and longer-period PR signals were needed. However, implementing such PR signal generators results in high power consumption and a large fabrication area. In this paper, we propose a novel aliased modulated wideband converter (AMWC), which can further reduce the sampling rate of the MWC with fixed PR signals. The main idea is to induce intentional signal aliasing at the analog-to-digital converter (ADC). In addition to the first spectral compression by the signal mixer, the intentional aliasing compresses the mixed spectrum once again. We demonstrate that the AMWC reduces the number of sampling branches and the ADC rate required for lossless sub-Nyquist sampling without needing to upgrade the speed or period of the PR signals. Conversely, for a fixed number of sampling branches and a fixed sampling rate, the AMWC improves the performance of signal reconstruction. Conversely stated, the aliasing acts as a second stage of compression. Comment: 13 pages with 6 figures, published in IEEE Trans. Signal Process.
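    The "intentional aliasing" effect at the ADC is the classical folding of the spectrum under decimation: sampling a length-N sequence every M-th point averages M equally spaced DFT bins into one. A minimal numerical demonstration of that identity (our own, with arbitrary N and M):

```python
import numpy as np

# Decimating a length-N sequence by M folds its DFT: each bin of the
# decimated signal equals the average of M equally spaced bins of the
# original DFT. This folding is the second spectral compression the
# AMWC induces at the ADC.
N, M = 16, 4
rng = np.random.default_rng(1)
x = rng.normal(size=N) + 1j * rng.normal(size=N)

Y = np.fft.fft(x[::M])                            # DFT of decimated signal
folded = np.fft.fft(x).reshape(M, N // M).sum(axis=0) / M  # fold into N/M bins
assert np.allclose(Y, folded)
```

    In the AMWC this folding is applied to the already PR-mixed spectrum, so the total compression is the product of the mixer's compression and the ADC's aliasing, with fixed PR signals.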

    Time-Variant Proof-of-Work Using Error-Correction Codes

    The protocol for cryptocurrencies can be divided into three parts, namely consensus, wallet, and networking overlay. The aim of the consensus part is to bring trustless, rational peer-to-peer nodes to an agreement on the current status of the blockchain. The status must be updated through valid transactions. A proof-of-work (PoW) based consensus mechanism has been proven to be secure and robust owing to its simple rules and has served as a firm foundation for cryptocurrencies such as Bitcoin and Ethereum. Specialized mining devices have emerged as rational miners aim to maximize profit, and they have caused two problems: i) the re-centralization of the mining market and ii) huge energy consumption in mining. In this paper, we propose a new PoW called Error-Correction Codes PoW (ECCPoW), in which error-correction codes and their decoders are utilized for PoW. In ECCPoW, puzzles can be intentionally generated to vary from block to block, leading to a time-variant puzzle generation mechanism. This mechanism is useful in suppressing the emergence of specialized mining devices and can serve as a solution to the two problems of re-centralization and energy consumption. Comment: 13 pages.
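    A time-variant puzzle can be sketched as follows. This is a hypothetical simplification, not ECCPoW itself: the previous block hash seeds a fresh binary parity-check-style matrix per block, and the miner searches for a nonce whose hash satisfies all parity checks (standing in for successful decoding).

```python
import hashlib
import numpy as np

def puzzle_matrix(prev_hash: bytes, m: int, n: int) -> np.ndarray:
    """Derive a fresh m x n binary parity-check-style matrix from the
    previous block hash, so the puzzle varies from block to block
    (hypothetical simplification of ECCPoW's time-variant generation)."""
    seed = int.from_bytes(hashlib.sha256(prev_hash).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.integers(0, 2, size=(m, n))

def solve_pow(prev_hash: bytes, m: int, n: int, max_nonce=200000):
    """Search for a nonce whose hash, read as a length-n binary word,
    satisfies all m parity checks of the block-specific matrix."""
    H = puzzle_matrix(prev_hash, m, n)
    for nonce in range(max_nonce):
        digest = hashlib.sha256(prev_hash + nonce.to_bytes(8, "big")).digest()
        bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))[:n]
        if not np.any(H @ bits % 2):  # all parity checks pass
            return nonce
    return None
```

    Because the matrix changes with every block, hardware hard-wired for one fixed puzzle loses its advantage, which is the mechanism's intended deterrent against specialized mining devices.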